
fix: normalize tool results to strings for OpenAI API compatibility#409

Open
Ayaan Ahmad (czarflix) wants to merge 1 commit into langchain-ai:main from czarflix:fix/issue-34669-openai-serialization

Conversation

@czarflix

🐛 Bug Fix

Issue

Addresses langchain-ai/langchain#34669
Users reported receiving 400 BadRequestError from OpenAI/Azure chat models when using MCP tools. The error "Input should be a valid dictionary or instance of Content" occurs because the adapter was returning tool content as a list[dict] (e.g., [{"type": "text", "text": "..."}]).
Root Cause: OpenAI's Chat Completions API strictly requires tool message content to be a string, not an array of content blocks.
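For context, a minimal sketch of the payload difference (the tool_call_id and message framing are illustrative, not taken from the PR):

```python
import json

# What the adapter previously placed in a tool message's "content"
# (rejected by Chat Completions with a 400):
buggy_content = [{"type": "text", "text": "Result"}]

# What the API accepts: a plain string.
fixed_content = "Result"

# A tool message as sent to the Chat Completions API; only the string
# form of "content" is valid here.
tool_message = {
    "role": "tool",
    "tool_call_id": "call_abc123",  # illustrative id
    "content": fixed_content,
}
print(json.dumps(tool_message))
```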

The Fix

This PR normalizes MCP tool results into a string format compatible with langchain-openai:

  1. Single Text Block → Returns plain str (e.g., "Result")
  2. Multiple Text Blocks → Returns newline-joined str (e.g., "Line 1\nLine 2")
  3. Mixed/Complex Content (e.g., images) → Returns JSON-serialized str (e.g., '[{"type":"image",...}]')
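The three rules above can be sketched as a single helper. This is a hedged stand-in, not the actual implementation of _convert_call_tool_result in the adapter:

```python
import json

def normalize_tool_content(blocks):
    """Normalize MCP content blocks into the string form OpenAI expects.

    Sketch of the PR's rules: all-text blocks are newline-joined (a single
    block is returned as-is); anything else is JSON-serialized.
    """
    text_blocks = [b for b in blocks if b.get("type") == "text"]
    if text_blocks and len(text_blocks) == len(blocks):
        # All blocks are text: join with newlines.
        return "\n".join(b["text"] for b in text_blocks)
    # Mixed or non-text content (e.g. images): serialize the whole list.
    return json.dumps(blocks)
```

For example, normalize_tool_content([{"type": "text", "text": "Result"}]) yields the plain string "Result".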

Compatibility Matrix

| Provider | Status | Notes |
| --- | --- | --- |
| OpenAI / Azure | Fixed | Resolves the 400 crash. |
| Anthropic | Compatible | Model parses the JSON-serialized string content. |
| Google Vertex | Compatible | Uses the string fallback. |

Verification

  • Reproduction: Confirmed crash locally using the list[dict] format.
  • Fix Validation: Verified string output passes to OpenAI without error.
  • Unit Tests: Updated tests/test_tools.py. Note: Previous tests asserted list output (the buggy behavior). I rewrote these assertions to expect the new normalized str output.

Change Impact

This PR changes the return type of _convert_call_tool_result from list[dict] to str.
Why this is NOT a breaking change:

  1. The previous list[dict] format caused crashes with OpenAI (the primary consumer).
  2. No working code could depend on the crashing format.
  3. Legacy Support: The raw MCPToolArtifact is still preserved and returned if downstream users need the original objects.

Proposed Release: Patch (Bug Fix)
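As a sketch of the artifact-preservation point above (the class and field names here are illustrative stand-ins, not the adapter's actual types):

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class NormalizedToolResult:  # illustrative stand-in, not the adapter's class
    content: str             # normalized string sent to the model
    artifact: Any = None     # raw MCP content blocks, preserved untouched

raw_blocks = [{"type": "image", "data": "base64..."}]
result = NormalizedToolResult(
    content='[{"type": "image", "data": "base64..."}]',
    artifact=raw_blocks,
)
```

Downstream code that needs the original objects reads them from the artifact rather than parsing the normalized string.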

Maintainer Note: This issue is tracked in the main langchain repo (#34669). Since cross-repo auto-close is unreliable, I can manually update that issue once this is merged.

Addresses langchain-ai/langchain#34669

OpenAI's Chat Completions API strictly requires tool message content to be
a string. The previous list[dict] format caused 400 BadRequestError crashes.

Changes:
- Single text blocks -> plain string
- Multiple text blocks -> newline-joined string
- Mixed/Complex content -> JSON-serialized string
- Updated tests to assert string output

This restores functionality for OpenAI while maintaining compatibility
with Anthropic (via string parsing).
@czarflix
Author

Furkan Zeki ÖZYURT (@fzozyurt) thanks for the review. Could a maintainer approve the workflows? Tagging Bagatur (@baskaryan) for visibility.

@blanketspy99

What if MCP returns an image that the LLM is expected to understand for feature extraction? The model cannot read raw bytes/base64 inside a JSON string. I assume OpenAI's Responses API can interpret list[dict] content, supporting multimodal capabilities.

@czarflix
Author

> What if MCP returns an image that the LLM is expected to understand for feature extraction? The model cannot read raw bytes/base64 inside a JSON string. I assume OpenAI's Responses API can interpret list[dict] content, supporting multimodal capabilities.

Yeah, so the Chat Completions API doesn't actually support images in tool messages: it returns a 400 if you try. This fix just makes sure we don't crash.

For image understanding you'd need the Responses API, which langchain-openai doesn't use by default yet. That could be a follow-up.
